HDFS Storage

How files are stored (slide series, parts 1–10; slide images not preserved)

When you read 1 TB of data on a normal file system, the data is read block by block. Each access fetches a block of only about 4 KB, so reading the complete data set requires a very large number of disk seeks, which reduces the performance of the system. When you read 1 TB of data on the Hadoop Distributed File System, blocks are read in parallel, and each access fetches a block of 64 MB or 128 MB, so far fewer disk seeks are needed to read the complete data, which gives much better performance. In normal file systems the block size is typically 1 KB or 4 KB, but in HDFS the block size can be 64 MB, 128 MB, 256 MB, and so on. The default is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x.
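The difference in seek overhead can be made concrete by counting how many blocks (and therefore how many seeks, at one seek per block) are needed to read 1 TB at each block size. This is a minimal sketch using the sizes quoted above; the helper name `blocks_needed` is just an illustration, not an HDFS API:

```python
# Compare how many blocks must be read for 1 TB of data under a
# typical local file system block size vs. the HDFS defaults.
# Block sizes come from the text; each block read implies roughly
# one disk seek, so fewer blocks means fewer seeks.

TB = 1024 ** 4  # 1 TB in bytes

def blocks_needed(total_bytes, block_size):
    """Number of blocks required to cover total_bytes (ceiling division)."""
    return (total_bytes + block_size - 1) // block_size

local_fs = blocks_needed(TB, 4 * 1024)           # 4 KB local FS block
hdfs_1x  = blocks_needed(TB, 64 * 1024 * 1024)   # 64 MB, Hadoop 1.x default
hdfs_2x  = blocks_needed(TB, 128 * 1024 * 1024)  # 128 MB, Hadoop 2.x default

print(local_fs)  # 268435456 blocks (and seeks) at 4 KB
print(hdfs_1x)   # 16384 blocks at 64 MB
print(hdfs_2x)   # 8192 blocks at 128 MB
```

At 128 MB blocks, HDFS needs 8,192 reads instead of over 268 million, a reduction of more than four orders of magnitude in seek count, and those block reads can additionally proceed in parallel across DataNodes.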
